
    Repairing misclassifications in neural networks using limited data

    We present a novel and computationally efficient method for repairing a feed-forward neural network with respect to a finite set of inputs that are misclassified. The method assumes no access to the training set. We present a formal characterisation for repairing the neural network and study its resulting properties in terms of soundness and minimality. We introduce a gradient-based algorithm that performs localised modifications to the network's weights such that misclassifications are repaired while marginally affecting network accuracy on correctly classified inputs. We introduce an implementation, I-REPAIR, and show that it is able to repair neural networks while reducing accuracy drops by up to 90% when compared to other state-of-the-art approaches for repair.
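
    A minimal, hypothetical sketch of the general idea of gradient-based localised repair is given below (PyTorch-style; the function name, the choice of a single layer to repair, and the loss weighting are illustrative assumptions, not the paper's I-REPAIR implementation):

        # Hypothetical sketch: update only one chosen layer so that misclassified
        # inputs are fixed, while a drift term preserves behaviour on inputs that
        # were already classified correctly. Not the paper's I-REPAIR code.
        import torch
        import torch.nn.functional as F

        def localised_repair(model, repair_layer, x_mis, y_mis, x_ok, y_ok,
                             steps=200, lr=1e-3, drift_weight=1.0):
            for p in model.parameters():           # freeze the whole network ...
                p.requires_grad_(False)
            for p in repair_layer.parameters():    # ... except the layer being repaired
                p.requires_grad_(True)
            opt = torch.optim.Adam(repair_layer.parameters(), lr=lr)
            for _ in range(steps):
                opt.zero_grad()
                repair_loss = F.cross_entropy(model(x_mis), y_mis)  # fix the misclassifications
                drift_loss = F.cross_entropy(model(x_ok), y_ok)     # limit the accuracy drop elsewhere
                loss = repair_loss + drift_weight * drift_loss
                loss.backward()
                opt.step()
            return model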

    Robustness Verification of Support Vector Machines

    We study the problem of formally verifying the robustness to adversarial examples of support vector machines (SVMs), a major machine learning model for classification and regression tasks. Following a recent stream of works on formal robustness verification of (deep) neural networks, our approach relies on a sound abstract version of a given SVM classifier to be used for checking its robustness. This methodology is parametric on a given numerical abstraction of real values and, analogously to the case of neural networks, needs neither abstract least upper bounds nor widening operators on this abstraction. The standard interval domain provides a simple instantiation of our abstraction technique, which is enhanced with the domain of reduced affine forms, an efficient abstraction of the zonotope abstract domain. This robustness verification technique has been fully implemented and experimentally evaluated on SVMs based on linear and nonlinear (polynomial and radial basis function) kernels, which have been trained on the popular MNIST dataset of images and on the recent and more challenging Fashion-MNIST dataset. The experimental results of our prototype SVM robustness verifier appear to be encouraging: this automated verification is fast, scalable and shows significantly high percentages of provable robustness on the test set of MNIST, in particular compared to the analogous provable robustness of neural networks.
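
    As a toy illustration of the interval-domain instantiation for a linear kernel (the paper's verifier also handles polynomial and RBF kernels and the reduced affine form domain), one might certify that a linear SVM keeps its decision on a whole box of perturbed inputs as follows; the names and the NumPy representation are assumptions made here for illustration:

        import numpy as np

        def linear_svm_robust(w, b, x, eps):
            # Over the box [x - eps, x + eps], the score w.x + b ranges in
            # centre +/- eps * ||w||_1; the decision is provably unchanged
            # iff that whole interval lies strictly on one side of zero.
            centre = float(np.dot(w, x) + b)
            radius = eps * float(np.abs(w).sum())
            return centre - radius > 0 or centre + radius < 0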

    OMTPlan: a tool for optimal planning modulo theories

    OMTPlan is a Python platform for optimal planning in numeric domains via reductions to Satisfiability Modulo Theories (SMT) and Optimization Modulo Theories (OMT). Currently, OMTPlan supports the expressive power of PDDL2.1 level 2 and features procedures for both satisficing and optimal planning. OMTPlan provides an open, easy-to-extend, yet efficient implementation framework. These goals are achieved through a modular design and the extensive use of state-of-the-art systems for SMT/OMT solving.
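
    For illustration only, the reduction style such planners build on can be sketched with the z3 Python API on a toy numeric domain (a counter with two actions); the domain and encoding below are invented and far simpler than PDDL2.1 level 2 planning:

        from z3 import Solver, Int, Bool, If, is_true, sat

        def plan(horizon=3):
            s = Solver()
            x = [Int(f"x_{t}") for t in range(horizon + 1)]      # counter value at each step
            add2 = [Bool(f"add2_{t}") for t in range(horizon)]   # True: add 2, False: subtract 1
            s.add(x[0] == 0)                                      # initial state
            for t in range(horizon):
                s.add(x[t + 1] == If(add2[t], x[t] + 2, x[t] - 1))   # transition relation
            s.add(x[horizon] == 3)                                # goal condition
            if s.check() == sat:
                m = s.model()
                return ["add2" if is_true(m.evaluate(add2[t], model_completion=True)) else "sub1"
                        for t in range(horizon)]
            return None   # no plan within the given horizon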

    Robust explanations for human-neural multi-agent systems with formal verification

    The quality of explanations in human-agent interactions is fundamental to the development of trustworthy AI systems. In this paper we study the problem of generating robust contrastive explanations for human-neural multi-agent systems and introduce two novel verification-based algorithms to (i) identify non-robust explanations generated by other methods and (ii) generate contrastive explanations equipped with formal robustness certificates. We present an implementation and evaluate the effectiveness of the approach on two case studies involving neural agents trained on credit scoring and traffic sign recognition tasks.

    Towards robust contrastive explanations for human-neural multi-agent systems

    Generating explanations of high quality is fundamental to the development of trustworthy human-AI interactions. We here study the problem of generating contrastive explanations with formal robustness guarantees. We formalise a new notion of robustness and introduce two novel verification-based algorithms to (i) identify non-robust explanations generated by other methods and (ii) generate contrastive explanations augmented with provable robustness certificates. We present an implementation and evaluate the utility of the approach on two case studies concerning neural agents trained on credit scoring and image classification tasks.

    Verification-friendly networks: the case for parametric ReLUs

    It has increasingly been recognised that verification can contribute to the validation and debugging of neural networks before deployment, particularly in safety-critical areas. While progress has been made in the area of verification of neural networks, present techniques still do not scale to large ReLU-based neural networks used in many applications. In this paper we show that considerable progress can be made by employing Parametric ReLU activation functions in lieu of plain ReLU functions. We give training procedures that produce networks which achieve an order-of-magnitude improvement in verification overheads and 30-100% fewer timeouts with VeriNet, a state-of-the-art Symbolic Interval Propagation-based verification toolkit, while not compromising the resulting accuracy. Furthermore, we show that adversarial training combined with our approach improves certified robustness by up to 36% compared to adversarial training performed on baseline ReLU networks.
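
    A minimal PyTorch sketch of the substitution itself is shown below; the architecture is arbitrary, and the paper's training procedures and the interaction with VeriNet are not reproduced:

        import torch.nn as nn

        # Same fully connected classifier, but every plain ReLU is replaced by a
        # Parametric ReLU whose negative-side slope is learned during training.
        def make_prelu_classifier(in_dim=784, hidden=128, out_dim=10):
            return nn.Sequential(
                nn.Linear(in_dim, hidden), nn.PReLU(),
                nn.Linear(hidden, hidden), nn.PReLU(),
                nn.Linear(hidden, out_dim),
            )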

    Counterfactual explanations and model multiplicity: a relational verification view

    We study the interplay between counterfactual explanations and model multiplicity in the context of neural network classifiers. We show that current explanation methods often produce counterfactuals whose validity is not preserved under model multiplicity. We then study the problem of generating counterfactuals that are guaranteed to be robust to model multiplicity, characterise its complexity and propose an approach to solve this problem using ideas from relational verification.
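
    The underlying notion can be sketched as follows: a counterfactual is robust to model multiplicity if every equally plausible model assigns it the intended class. The snippet below is an empirical check over a finite set of near-equivalent models, given only for intuition; the paper targets formal guarantees via relational verification rather than enumeration, and the predict-style interface is an assumption:

        def valid_under_multiplicity(models, x_cf, target_class):
            # A counterfactual is (empirically) valid under model multiplicity if
            # every near-equivalent model assigns it the intended target class.
            return all(m.predict([x_cf])[0] == target_class for m in models)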

    Improving reliability of myocontrol using formal verification

    In the context of assistive robotics, myocontrol is one of the so-far unsolved problems of upper-limb prosthetics. It consists of swiftly, naturally, and reliably converting biosignals, non-invasively gathered from an upper-limb disabled subject, into control commands for an appropriate self-powered prosthetic device. Despite decades of research, traditional surface electromyography cannot yet detect the subject's intent to an acceptable degree of reliability, that is, enforce an action exactly when the subject wants it to be enforced. In this paper, we tackle one such kind of mismatch between the subject's intent and the response by the myocontrol system, and show that formal verification can indeed be used to mitigate it. Eighteen intact subjects were engaged in two target achievement control tests in which a standard myocontrol system was compared to two “repaired” ones, one based on a non-formal technique, thus enforcing no guarantee of safety, and the other using the satisfiability modulo theories (SMT) technology to rigorously enforce the desired property. The experimental results indicate that both repaired systems exhibit better reliability than the non-repaired one. The SMT-based system causes only a modest increase in the required computational resources with respect to the non-formal technique; as opposed to this, the non-formal technique can be easily implemented in existing myocontrol systems, potentially increasing their reliability.
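
    The concrete myocontrol models and properties of the study are not reproduced here; a generic z3 sketch of the kind of query involved is shown below, asking whether any input in a given region can drive a linear controller output above a threshold (a satisfiable query yields a counterexample to the desired property). The function name and the linear-output assumption are illustrative:

        from z3 import Solver, Real, sat

        def can_violate(weights, bias, lo, hi, threshold):
            # Is there an input with lo[i] <= x[i] <= hi[i] whose output w.x + b
            # exceeds the threshold? sat means the property can be violated.
            s = Solver()
            xs = [Real(f"x_{i}") for i in range(len(weights))]
            for x, l, h in zip(xs, lo, hi):
                s.add(x >= l, x <= h)
            out = sum(w * x for w, x in zip(weights, xs)) + bias
            s.add(out > threshold)
            return s.check() == sat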

    Formalising the robustness of counterfactual explanations for neural networks

    The use of counterfactual explanations (CFXs) is an increasingly popular explanation strategy for machine learning models. However, recent studies have shown that these explanations may not be robust to changes in the underlying model (e.g., following retraining), which raises questions about their reliability in real-world applications. Existing attempts towards solving this problem are heuristic, and the robustness to model changes of the resulting CFXs is evaluated with only a small number of retrained models, failing to provide exhaustive guarantees. To remedy this, we propose the first notion to formally and deterministically assess the robustness (to model changes) of CFXs for neural networks, which we call ∆-robustness. We introduce an abstraction framework based on interval neural networks to verify the ∆-robustness of CFXs against a possibly infinite set of changes to the model parameters, i.e., weights and biases. We then demonstrate the utility of this approach in two distinct ways. First, we analyse the ∆-robustness of a number of CFX generation methods from the literature and show that they all exhibit significant deficiencies in this regard. Second, we demonstrate how embedding ∆-robustness within existing methods can provide CFXs which are provably robust.
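
    A simplified interval-propagation sketch in the spirit of the interval neural network abstraction is given below for a small ReLU network with a single output logit; the NumPy representation, the uniform perturbation bound delta on all weights and biases, and the positive-class threshold at zero are illustrative assumptions rather than the paper's framework:

        import numpy as np

        def interval_affine(xl, xu, Wl, Wu, bl, bu):
            # Interval matrix-vector product: the bounds of each weight-input product
            # are attained at the four interval corners; sum per neuron, add the bias interval.
            cands = np.stack([Wl * xl, Wl * xu, Wu * xl, Wu * xu])
            return cands.min(axis=0).sum(axis=1) + bl, cands.max(axis=0).sum(axis=1) + bu

        def delta_robust(x_cf, layers, delta):
            # layers: list of (W, b) NumPy pairs; single output logit, positive class if > 0.
            # Returns True if the counterfactual keeps the positive class for every
            # perturbation of weights and biases bounded by delta (sound but incomplete).
            xl = xu = np.asarray(x_cf, dtype=float)
            for i, (W, b) in enumerate(layers):
                xl, xu = interval_affine(xl, xu, W - delta, W + delta, b - delta, b + delta)
                if i < len(layers) - 1:                      # ReLU on hidden layers only
                    xl, xu = np.maximum(xl, 0.0), np.maximum(xu, 0.0)
            return bool(xl[0] > 0.0)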

    SMT-based Planning for Robots in Smart Factories

    Smart factories are on the verge of becoming the new industrial paradigm, wherein optimization permeates all aspects of production, from concept generation to sales. To fully pursue this paradigm, flexibility in the production means as well as in their timely organization is of paramount importance. AI planning can play a major role in this transition, but the scenarios encountered in practice might be challenging for current tools. We explore the use of SMT at the core of planning techniques to deal with real-world scenarios in the emerging smart factory paradigm. We present special-purpose and general-purpose algorithms, based on current automated reasoning technology and designed to tackle complex application domains. We evaluate their effectiveness and respective merits on a logistics scenario, also extending the comparison to other state-of-the-art task planners.